147 research outputs found

    Homometric sets in trees

    Let G = (V, E) denote a simple graph with vertex set V and edge set E. The profile of a vertex subset V' \subseteq V is the multiset of pairwise distances between the vertices of V'. Two disjoint subsets of V are \emph{homometric} if their profiles are the same. If G is a tree on n vertices, we prove that its vertex set contains a pair of disjoint homometric subsets of size at least \sqrt{n/2} - 1. Previously it was known that such a pair of size at least roughly n^{1/3} exists. We obtain a better result in the case of haircomb trees, for which we are able to find a pair of disjoint homometric sets of size at least cn^{2/3} for a constant c > 0.
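    For concreteness, the distance profile and the homometry test can be sketched in a few lines of Python (a minimal illustration on a path graph; the helper names are ours, not the paper's):

    ```python
    from collections import Counter, deque
    from itertools import combinations

    def bfs_dist(adj, src):
        # Unweighted distances from src via BFS
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def profile(adj, subset):
        # Multiset of pairwise distances between the vertices of `subset`
        return Counter(bfs_dist(adj, u)[v] for u, v in combinations(subset, 2))

    def are_homometric(adj, a, b):
        # Disjoint subsets whose profiles coincide
        return set(a).isdisjoint(b) and profile(adj, a) == profile(adj, b)

    # Path 0-1-2-3-4: {0, 1} and {3, 4} both have profile {1: 1}
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(are_homometric(path, [0, 1], [3, 4]))  # True
    ```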

    Terminology, diagnostics and therapy of laryngopharyngeal reflux: A glimpse into the past


    Robust Submodular Maximization: A Non-Uniform Partitioning Approach

    We study the problem of maximizing a monotone submodular function subject to a cardinality constraint k, with the added twist that a number of items \tau from the returned set may be removed. We focus on the worst-case setting considered in (Orlin et al., 2016), in which a constant-factor approximation guarantee was given for \tau = o(\sqrt{k}). In this paper, we solve a key open problem raised therein, presenting a new Partitioned Robust (PRo) submodular maximization algorithm that achieves the same guarantee for the more general \tau = o(k). Our algorithm constructs partitions consisting of buckets with exponentially increasing sizes, and applies standard submodular optimization subroutines on the buckets in order to construct the robust solution. We numerically demonstrate the performance of PRo in data summarization and influence maximization, demonstrating gains over both the greedy algorithm and the algorithm of (Orlin et al., 2016). Comment: Accepted to ICML 201
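    As context, the greedy baseline that PRo is compared against can be sketched as follows (with a toy coverage objective of our own choosing, not from the paper):

    ```python
    def greedy_max(f, ground, k):
        # Standard greedy for a monotone submodular f under |S| <= k
        S = []
        for _ in range(k):
            gains = {x: f(S + [x]) - f(S) for x in ground if x not in S}
            if not gains:
                break
            S.append(max(gains, key=gains.get))
        return S

    # Toy coverage function: value of S = number of points covered
    sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2}}
    cover = lambda S: len(set().union(*(sets[x] for x in S))) if S else 0
    picked = greedy_max(cover, list(sets), 2)
    print(picked, cover(picked))
    ```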

    Risk factors for coronary heart disease and actual diagnostic criteria for diabetes mellitus

    Background/Aim. Recent studies indicate that the prevalence of diabetes mellitus (DM) type 2 is increasing worldwide. Chronic hyperglycemia in DM is associated with long-term damage, dysfunction and failure of various organs, especially the retina, kidneys and nerves, and with an increased risk of cardiovascular disease. The illness often remains unrecognized for a long time, and early diagnosis of diabetes could forestall the development of diabetic complications. The aim of the study was to establish the risk of developing coronary disease in patients evaluated by the new diagnostic criteria for DM. Methods. The study included 930 participants with no diagnosis of DM, hypertension, dyslipidemia or coronary heart disease in the two years before the study. The patients underwent measurement of fasting plasma glycemia, erythrocytes, hematocrit, cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol, aspartate aminotransferase and alanine aminotransferase. In the group with hyperglycemia, a 2-hour oral glucose tolerance test was performed. We analyzed ECG, monitored blood pressure, and measured body mass, height, and waist and hip circumference. We also analyzed lifestyle, especially smoking and exercise, and family history. Results. Diabetes prevalence was 2.68%, and the combined prevalence of impaired fasting glucose, impaired glucose tolerance and DM was 12.15%. The average age of males and females was 38 and 45 years, respectively. The healthy population had a higher frequency of smokers (55% vs 42%), but the population with hyperglycemia had more obesity (23% vs 10.5%), hypertension (39% vs 9%), hypercholesterolemia (76% vs 44.1%) and low HDL-C (52.2% vs 25.7%). Cumulative risk factors in healthy subjects and in those with hyperglycemia were 5.6% and 14%, respectively. Conclusion. Subjects with hyperglycemia but without a diagnosis of DM have a higher burden of risk factors for coronary heart disease.

    The voice of patients with laryngeal carcinoma after oncosurgery

    The voice of patients indicated for surgical treatment of dysphonia is already damaged before the operation. Patients usually try to solve the problem, which exists at the level of the glottis, by compensatory mechanisms. The quality of voice after interventions in the larynx depends on the type and extent of resection, the disturbance of physiological phonation mechanisms, and the ability to establish an optimal phonation automatism. Damage to the laryngeal structure, especially its glottic part and the vocal cords at its center, whether they are merely fibrous or partially or totally absent, leads to the development of substitutive phonation mechanisms. The most frequent substitutive mechanisms are vestibular, ventricular, and chordoventricular phonation. There are variations of these phonation mechanisms, conditioned not only by the applied surgical technique but also, as individual characteristics, by the applied rehabilitation methods. The voice condition before and after the oncosurgical procedure is assessed by laryngostroboscopy, subjective acoustic analysis of voice, and objective acoustic analysis of voice (sonography or computer analysis of the acoustic signal). Most laryngeal carcinomas appear in the glottic region, so phonation imposes itself as an objective parameter for measuring quality of life after oncosurgery of the larynx. That is why, in order of priority, it comes just behind the principle of "oncologic radicalism". Although phonation, the most complex laryngeal function, may seem of secondary importance, all known operative techniques, especially partial resections, have the preservation of phonation as their goal.

    Connectivity: An Ecological Paradigm for the Study of Bronze Age

    “Connectivity: an ecological paradigm for the study of Bronze Age” addresses the relationship between historic and prehistoric people and the landscapes they inhabited, moved about, and continue to inhabit. It suggests alternative methodological approaches with broader ramifications for the discipline of (Bronze Age) archaeology. By engaging the code and innovations stemming from ecology and digital technology, the research questions concern the interface, referred to as connectivity, between archaeological sites, resources, networks of communication, and the conditions of archaeological knowledge acquisition. Drawing on published and new data, the aim of the project is to put forward a strategy for geographically and linguistically inclusive research on the Bronze Age Collapse, analyzing landscape connectivity without promoting culture as the common denominator of archaeological data sets. The topics explored, including archaeometallurgy, environmental pressures, mobility, and pottery analysis, can be distilled to the issue of scalability of archaeological scholarship. The narrower case study focuses on southeastern Europe, 1650-1100 BCE.

    Massively Parallel Algorithms for Distance Approximation and Spanners

    Over the past decade, there has been increasing interest in distributed/parallel algorithms for processing large-scale graphs. By now, we have quite fast algorithms -- usually sublogarithmic-time and often poly(\log\log n)-time, or even faster -- for a number of fundamental graph problems in the massively parallel computation (MPC) model. This model is a widely-adopted theoretical abstraction of MapReduce style settings, where a number of machines communicate in an all-to-all manner to process large-scale data. Contributing to this line of work on MPC graph algorithms, we present poly(\log k) \in poly(\log\log n) round MPC algorithms for computing O(k^{1+o(1)})-spanners in the strongly sublinear regime of local memory. To the best of our knowledge, these are the first sublogarithmic-time MPC algorithms for spanner construction. As primary applications of our spanners, we get two important implications, as follows: - For the MPC setting, we get an O(\log^2\log n)-round algorithm for O(\log^{1+o(1)} n) approximation of all pairs shortest paths (APSP) in the near-linear regime of local memory. To the best of our knowledge, this is the first sublogarithmic-time MPC algorithm for distance approximations. - Our result above also extends to the Congested Clique model of distributed computing, with the same round complexity and approximation guarantee. This gives the first sub-logarithmic algorithm for approximating APSP in weighted graphs in the Congested Clique model.
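    For intuition about what a spanner is, the classical sequential greedy construction of a (2k-1)-spanner can be sketched as follows (this is the textbook sequential procedure, not the paper's MPC algorithm):

    ```python
    from collections import deque

    def greedy_spanner(n, edges, k):
        # Classical sequential (2k-1)-spanner: keep an edge only if its
        # endpoints are currently farther than 2k-1 apart in the spanner.
        adj = {v: [] for v in range(n)}

        def within(u, v, limit):
            # BFS from u truncated at depth `limit`: is dist(u, v) <= limit?
            if u == v:
                return True
            seen = {u: 0}
            q = deque([u])
            while q:
                x = q.popleft()
                if seen[x] == limit:
                    continue
                for y in adj[x]:
                    if y not in seen:
                        if y == v:
                            return True
                        seen[y] = seen[x] + 1
                        q.append(y)
            return False

        spanner = []
        for u, v in edges:
            if not within(u, v, 2 * k - 1):
                adj[u].append(v)
                adj[v].append(u)
                spanner.append((u, v))
        return spanner

    # K4 with k = 2: the star around vertex 0 already gives stretch <= 3
    k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(greedy_spanner(4, k4, 2))  # [(0, 1), (0, 2), (0, 3)]
    ```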

    Streaming Robust Submodular Maximization: A Partitioned Thresholding Approach

    We study the classical problem of maximizing a monotone submodular function subject to a cardinality constraint k, with two additional twists: (i) elements arrive in a streaming fashion, and (ii) m items from the algorithm's memory are removed after the stream is finished. We develop a robust submodular algorithm STAR-T. It is based on a novel partitioning structure and an exponentially decreasing thresholding rule. STAR-T makes one pass over the data and retains a short but robust summary. We show that after the removal of any m elements from the obtained summary, a simple greedy algorithm STAR-T-GREEDY that runs on the remaining elements achieves a constant-factor approximation guarantee. In two different data summarization tasks, we demonstrate that it matches or outperforms existing greedy and streaming methods, even if they are allowed the benefit of knowing the removed subset in advance. Comment: To appear in NIPS 201
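    A bare-bones version of the thresholding idea behind a single streaming pass can be sketched as follows (one fixed threshold on marginal gains; STAR-T's actual structure uses a partition with exponentially decreasing thresholds, which this toy sketch does not reproduce):

    ```python
    def threshold_pass(f, stream, k, threshold):
        # One pass: keep an arriving element iff its marginal gain with
        # respect to the current summary meets the threshold and there is room.
        S = []
        for x in stream:
            if len(S) < k and f(S + [x]) - f(S) >= threshold:
                S.append(x)
        return S

    # Toy coverage objective over ground sets (our example, not the paper's)
    sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2}}
    cover = lambda S: len(set().union(*(sets[x] for x in S))) if S else 0
    print(threshold_pass(cover, "abcd", 2, 1))  # ['a', 'b']
    ```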

    New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling

    Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results. For (1+\varepsilon)-approximation of job scheduling of n jobs on a single machine, we obtain a fully dynamic algorithm with O(\frac{\log n}{\varepsilon}) update and O(\log n) query worst-case time. Further, we design a local computation algorithm that uses only O(\frac{\log n}{\varepsilon}) queries. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we obtain a fully dynamic algorithm whose worst-case update and query time has only polynomial dependence on 1/\varepsilon, which is an exponential improvement over the result of Henzinger et al. [SoCG, 2020]. We extend our approaches for unweighted interval scheduling on a single machine to the setting with M machines, while achieving the same approximation factor and only M times slower update time in the dynamic setting. In addition, we provide a general framework for reducing the task of interval scheduling on M machines to that of interval scheduling on a single machine. In the unweighted case this approach incurs a multiplicative approximation factor of 2 - 1/M.
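    As a static baseline for the single-machine, unweighted case, the classical earliest-finish-time greedy (the exact offline algorithm that the dynamic and local algorithms above approximate) can be sketched as:

    ```python
    def max_nonoverlapping(jobs):
        # Exact greedy for unweighted interval scheduling on one machine:
        # sort by finishing time, keep each job that starts after the last end.
        chosen, last_end = [], float("-inf")
        for start, end in sorted(jobs, key=lambda j: j[1]):
            if start >= last_end:
                chosen.append((start, end))
                last_end = end
        return chosen

    print(max_nonoverlapping([(0, 3), (2, 5), (4, 7), (1, 8), (6, 9)]))
    # [(0, 3), (4, 7)]
    ```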